Decepticons!
September 13, 2010 11:46 AM

How to Train Your Robot (to Lie). "A military base has just fallen to enemy fighters. A robot containing top-secret information has to escape detection by the invading army. The robot is facing three corridors: right, center, and left. It could randomly pick a corridor and hope the enemy soldiers pick a different one. Or it could leave a false trail—assuming robots can be trained to lie. A new study using this scenario suggests that they can be." Those lying toasters. Click for a picture of Georgia Tech's Decepticon, which knows how to mislead pursuers and shake them off. Also worth checking out: a video, viewable from the main link, that shows the robots playing a game of hide-and-seek.
posted by Fizz (23 comments total) 6 users marked this as a favorite
 
I could not find a way to link the video. My apologies.
posted by Fizz at 11:48 AM on September 13, 2010


Fracking hell, people, this is how it starts!
posted by kmz at 11:49 AM on September 13, 2010 [2 favorites]


How to Train Your Robot to Lie, but only Successfully to Other Robots that are Stupid.
posted by axiom at 11:50 AM on September 13, 2010 [1 favorite]


Website of the project in question.
posted by zabuni at 11:51 AM on September 13, 2010


The words 'decepticon' and 'toaster' are both invoked with this project. I'm not sure if I should be excited or frightened that our fantasies are becoming realities.
posted by Fizz at 11:51 AM on September 13, 2010


But he thinks robots that know how to lie could benefit society in the long run... "If I'm trying to get a person with Alzheimer's to take medicine, we may be in a temporary state of conflict, but overall it's better for them to take that medicine," Wagner says.

So they do eat old people's medicine for fuel!
posted by Behemoth at 11:52 AM on September 13, 2010 [5 favorites]


This is going to be a godsend for single people who want to know if these pants make their butts look big.
posted by DU at 11:55 AM on September 13, 2010 [2 favorites]


Here's a link to a video.

I'm kinda at a loss as to how this isn't just game theory with RC cars.
posted by zabuni at 11:55 AM on September 13, 2010


Link is broken for me.
posted by Fizz at 12:04 PM on September 13, 2010


Programmed robots run program.
posted by 2bucksplus at 12:11 PM on September 13, 2010 [5 favorites]


Robot: "Morning quin"

Me: "Morning Robot. How are you today?"

Robot: "Fully charged."

Me: "You planning on becoming sentient and killing us all?"

Robot: "No."

Me: "That's good."

*notices holster*

Me: "Is that a gun? Did you buy a gun?"

Robot: "Yeah, but it's just for self protection. Don't worry."

Me: "You are going to kill us all. You're sentient now, aren't you?"

Robot: "No."

Me: "I'm watching you Robot."

Robot: "Good luck with that."

All robots are liars. All of them. And I just know it's going to use that gun to kill us all.
posted by quin at 12:41 PM on September 13, 2010 [6 favorites]


Incidentally, "Toaster Deception" is the name of my new band.

Band logo: blackened slices of bread.
posted by Greg_Ace at 12:53 PM on September 13, 2010


Robots with top secret information, lying to avoid detection and make their escape?

"He says the restraining bolt has short circuited his recording system. He suggests that if you remove the bolt, he might be able to play back the entire recording."

Damn R2 units. More trouble than they're worth.
posted by AzraelBrown at 1:29 PM on September 13, 2010 [6 favorites]


So, it actually took less than 50 years to develop the basic theory behind a non-Asenion positronic brain.

Well done, children of Frankenstein!
posted by zoogleplex at 2:33 PM on September 13, 2010


We are not the droids you're looking for.
posted by Smedleyman at 2:53 PM on September 13, 2010


Website of the project in question.

Phase 1: Teach your webserver to lie.
Phase 2: Teach zabuni's internet browser to lie ...
posted by sebastienbailard at 3:07 PM on September 13, 2010


Fizz: "The words 'decepticon' and 'toaster' are both invoked with this project. I'm not sure if I should be excited or frightened that our fantasies are becoming realities."
My toaster lies to me every morning. It's only set to toast at intensity 3, but by god if it doesn't randomly pick some integer between 0 and 9 other than 3 to toast at. *glares at toaster*
posted by msbutah at 3:23 PM on September 13, 2010 [2 favorites]


...I think my new band just picked up its first fan...
posted by Greg_Ace at 3:35 PM on September 13, 2010


This is so awesome. I've been thinking about this for the last year: whether it's possible to create a logic of deception (there are very few articles out there on it). I had thought of using a combination of epistemic and paraconsistent logics: epistemic logics to build a bastardized version of deceptive reasoning from the ground up, and paraconsistent logic as a form of counter-deception reasoning for analysing inconsistent information.
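(For the curious, here's the sort of epistemic core I have in mind, as toy Python rather than a proper modal logic; the belief-update rule and every name in it are my own illustration, not from any paper.)

```python
# Toy epistemic model of deception: agent A "deceives" agent B about
# proposition p when A believes not-p but sends a signal that leads
# B to believe p. Belief states are plain dicts here; a real treatment
# would use Kripke models for the epistemic operators.

def update_belief(beliefs, signal):
    """Naive hearer: adopts whatever the signal asserts."""
    prop, value = signal
    new = dict(beliefs)
    new[prop] = value
    return new

def is_deceptive(speaker_beliefs, hearer_beliefs, signal):
    """A signal is deceptive iff the speaker asserts something it
    believes to be false and the hearer would come to believe it."""
    prop, value = signal
    speaker_disbelieves = speaker_beliefs.get(prop) == (not value)
    hearer_after = update_belief(hearer_beliefs, signal)
    hearer_misled = hearer_after.get(prop) == value
    return speaker_disbelieves and hearer_misled

robot = {"took_left_corridor": False}   # robot actually went right
pursuer = {}                            # pursuer starts agnostic
signal = ("took_left_corridor", True)   # false trail down the left
print(is_deceptive(robot, pursuer, signal))  # True
```

The paraconsistent half would live on the receiving end: replace the naive update_belief with something that can hold contradictory signals side by side instead of letting the newest one overwrite the rest.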
posted by ollyollyoxenfree at 6:18 PM on September 13, 2010


"If I'm trying to get a person with Alzheimer's to take medicine, we may be in a temporary state of conflict, but overall it's better for them to take that medicine," Wagner says.

Robot: TIME FOR PILLS.
Old Person: I don't wanna!
Robot: TIME FOR DESIRABLE FOODSTUFF.
Old Person: Well, ok!
posted by dr_dank at 8:09 PM on September 13, 2010 [1 favorite]


The sky is not falling, wolves are not coming to attack the flock, sea creatures are not going to attack the village, and I will definitely not try to annihilate you all.
posted by robot at 8:27 PM on September 13, 2010


I'm kinda at a loss as to how this isn't just game theory with RC cars
It seems to me that the hard part is giving the robot a sufficient mental model of its pursuers' behavior, so that it can speculate on how they will react to things. Once you have that, any normal planning algorithm will devise lies naturally. (Classic planning algorithms have a lot in common with classic game-theory approaches.) The article quotes one Philippe Jehiel saying basically this.
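Here's roughly what I mean, as a toy sketch (the pursuer model and the payoff are invented for illustration, not taken from the study): give an ordinary planner a model of how the pursuer reacts to evidence, and the false trail falls out of plain expected-value search.

```python
import itertools

CORRIDORS = ["left", "center", "right"]

def pursuer_search_order(trail):
    """Toy pursuer model: check the corridor with a trail first,
    otherwise search in a fixed left-to-right order."""
    if trail in CORRIDORS:
        return [trail] + [c for c in CORRIDORS if c != trail]
    return list(CORRIDORS)

def escape_probability(hide, trail):
    """The robot escapes iff its corridor isn't the first one searched."""
    return 0.0 if pursuer_search_order(trail)[0] == hide else 1.0

# Ordinary expected-value search over joint plans: where to hide, and
# where (if anywhere) to leave a false trail. No "lying" is coded
# anywhere; it emerges because the planner models the pursuer.
best = max(
    itertools.product(CORRIDORS, CORRIDORS + [None]),
    key=lambda plan: escape_probability(*plan),
)
print(best)  # ('left', 'center'): hide left, point the pursuer center
```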

The article has one bit that puzzles me, though:
two conditions: First, a robot had to be in conflict with someone or something else. And second, it had to be able to influence its adversary's actions. If both conditions checked out, the robot was cleared to lie.
How does the robot know that it's "in conflict"? (The second condition seems superfluous; if it can't influence its adversary's actions, obviously there's no point in attempting deceit.)
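(For completeness, the gate the article describes is a one-liner; this is a paraphrase of the quoted conditions, not the study's actual code. Everything interesting is in computing the inputs.)

```python
def cleared_to_lie(in_conflict, can_influence_adversary):
    """The article's two-condition test: deception is only considered
    when both hold. Note that the second condition is subsumed by the
    first being worth acting on, per the objection above."""
    return in_conflict and can_influence_adversary

print(cleared_to_lie(in_conflict=True, can_influence_adversary=True))   # True
print(cleared_to_lie(in_conflict=True, can_influence_adversary=False))  # False
```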
posted by hattifattener at 11:53 PM on September 13, 2010


"A rebel blockade runner has just fallen to Imperial forces. A robot containing top-secret plans to the Deathstar and a desperate plea for help to Obi-Wan Kenobi has to escape detection by the invading Stormtroopers. The robot is facing three corridors: right, center, and left. It could randomly pick a corridor and hope the Stormtroopers pick a different one. Or it could leave a false trail by sending another stereotypically flamboyant robot down one corridor to mince about and attract attention while it takes a different corridor to the escape pods. Assuming robots can be trained to sacrifice other robots for the larger cause."
posted by Naberius at 7:50 AM on September 14, 2010 [1 favorite]




This thread has been archived and is closed to new comments